Robert Gibbons (2005), "Incentives Between Firms (and Within)," Management Science, 51(1): 2-17, January 2005.


Eric Rasmusen
13 January 2007

This is a very nice survey article on developments in agency contracting theory from the 1990's. Here, I have written some notes on it--notes I never finished up.

Gibbons starts with a linear-contract version of the basic principal-agent model, and then notes that since Mirrlees (1974) we've known that nonlinear contracts can usually do better, and that even a simple step-function can often do a lot better. He notes the problem of history-dependent incentives: if the agent observes how output is proceeding during the year, that will affect how hard he works--- it's too easy to earn the bonus if things are going well up to October--- and he provides references.

Gibbons cites what sounds like a very good article: Steven Kerr's 1975 "On the folly of rewarding A, while hoping for B." This leads into the Holmstrom-Milgrom (1991) multitasking model.

The next point I found novel was on page 5, where he cites Baker (2002) on why a performance measure p might be highly correlated with the principal's true objective y and still be a bad incentive. Suppose p is this year's earnings (the measure) and y is the stock price (the goal). Both might be highly correlated with GDP, and so with each other. But if the manager is rewarded for earnings, he will focus on the short run, not the long run, and the stock price will suffer.
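To see Baker's point in numbers, here is a little simulation sketch. The functional forms and numbers are mine, not Baker's or Gibbons's: the measure p and the goal y share a large GDP shock, so they are highly correlated, but the manager's action moves only p.

    # Sketch of Baker's distortion point (illustrative numbers, not from the paper).
    # Earnings p and stock price y both load on a common GDP shock, so they are
    # highly correlated, but the manager's short-run action raises p and leaves y alone.
    import numpy as np

    rng = np.random.default_rng(0)
    gdp = rng.normal(0, 10, 100_000)                 # common macro shock
    a = 1.0                                          # manager's short-run action (say, cutting R&D)
    p = gdp + 5 * a + rng.normal(0, 1, gdp.size)     # earnings: moved by GDP and by the action
    y = gdp + 0 * a + rng.normal(0, 1, gdp.size)     # stock price: moved by GDP, not by the action

    print("corr(p, y) =", round(np.corrcoef(p, y)[0, 1], 3))   # about 0.99
    print("dp/da = 5, dy/da = 0: paying on p elicits a, which does nothing for y")

So a regression of y on p would look wonderful, yet an incentive contract written on p buys none of what the principal actually wants.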

Relational Contracts = Repeated Games

Next, Gibbons talks about Relational Contracts. These are not contracts at all in the legal sense--- they are agreements that can't be enforced in court, because the relevant variables aren't observed except by the parties, or maybe the parties do not even know the relevant outcomes ex ante. Instead, reputation has to do the work--- that is, repetition of the game. For this, we use models of infinitely repeated games. (On this, see the new 2006 book by Mailath and Samuelson.)

Here's a version of the reputation model.

1. Principal and agent bargain over a wage W(high) to be paid for high effort. This is a one-time bargain, not one each period. We will model the bargaining as each player having a 50% probability of making the single take-it-or-leave-it offer to the other.

2. The agent chooses high effort H or low effort 0 for the principal each period.

3. The principal pays the agent W(high) or doesn't pay.

4. The game returns to 2 and repeats forever.

Both players earn zero if no bargain is reached in move 1. If they do reach a bargain, the agent's payoff is his pay minus H for each period of high effort. The principal's payoff is H+S for each period of high effort, minus the amount he pays out. Both players use discount rate r, and payoffs are received at the start of each period.

The agent must be paid at least W(high) = H more for high effort than for low, or he will choose low effort. If effort were contractible, the most the principal would pay is W(high) = H+S. But here the wage is paid after the effort, and W(high) = H+S would leave the principal nothing from honoring the agreement, so he would deviate to paying 0 and the agent would switch to low effort. So the highest sustainable level of W(high) must be lower.

Let's look at a grim-strategy equilibrium, in which the wage for high effort is W(high), the principal always pays it, and the agent always chooses high effort; if either player deviates, then the agent chooses zero effort and the principal never pays, forever after (so W(low) = 0).

In equilibrium, the principal's payoff is

(H+S) - W(high) + [(H+S) - W(high)]/r

If he deviates to not paying, then his payoff is

(H+S)

These are equal at the principal's maximum equilibrium wage:

(H+S) - W(high) + [(H+S) - W(high)]/r = (H+S), which gives [(H+S) - W(high)]/r = W(high), so

W(high) = (H+S)/(1+r)

We can distinguish three cases, based on bargaining power. If the principal makes the wage offer, he will set W(high) = H, driving the agent's payoff to zero. If the agent makes the wage offer, he will set W(high) = (H+S)/(1+r). This does not drive the principal's payoff to zero (W(high) = H+S would do that), but it is the highest wage that the principal would pay rather than break the agreement. If they have equal bargaining power, in the sense of each having a .50 chance of making the offer, then the expected wage is W = [H(1+r) + H+S]/[2(1+r)], halfway between.
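To check the algebra, here is a quick numerical sketch; the values H = 1, S = 1, r = .1 are just illustrative and not from Gibbons.

    # Numerical check of the relational-contract wage bound and the bargaining cases.
    H, S, r = 1.0, 1.0, 0.1

    W_high_max = (H + S) / (1 + r)               # highest wage the principal will honor

    # Principal's payoff from honoring the agreement forever vs. deviating once:
    honor = (H + S - W_high_max) * (1 + 1 / r)   # this period plus a perpetuity from next period
    deviate = H + S                              # keep one period's surplus, then zero forever
    print(round(honor, 6), round(deviate, 6))    # equal at W_high_max, as claimed

    # The three bargaining cases:
    W_principal_offers = H                       # agent held to his reservation payoff
    W_agent_offers = W_high_max                  # capped by the self-enforcement constraint
    W_fifty_fifty = 0.5 * (W_principal_offers + W_agent_offers)
    print(round(W_fifty_fifty, 6),
          round((H * (1 + r) + H + S) / (2 * (1 + r)), 6))   # same number both ways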

This is a case where the bargaining submodel of giving each player a .5 chance of making the offer is especially helpful. It is a structural model, not a reduced-form one, so it gives an unambiguous answer. If, instead, we said that the wage split the surplus evenly, we might have a problem defining "surplus". Is it the difference between (H+S)/(1+r) and H, which is the difference between the offers each side would make, or is it the difference between H+S and H, which is the total gains from trade? I don't know--- so I am thankful for the trick of having each side make the offer with 50% probability.

Actually, real-world informal agreements have the problem of ambiguity too. That's one reason corporate lawyers want the businessmen to put everything into writing-- the businessmen underestimate the probability of confusion. On page 8, Gibbons tells the story of Credit Suisse, which bought First Boston and paid its employees low bonuses. The employees claimed bad faith, but Credit Suisse pointed to First Boston's poor performance. After two years of low bonuses, many of the First Boston investment bankers quit--- which was perhaps not altogether displeasing to Credit Suisse.

Career Concerns: Trying to Impress Employers, and Promotion as Evidence of Ability

The "Career Concerns" literature looks at a different way to self-enforce agreements: using information flow to the outside world. Begin with Holmstrom(1982): a manager will work too hard in his early years, because he wants to impress his principal and the outside world (a hidden information reason), and too little when he is old, for the usual hidden effort reason of being lazy. Gibbons and Murphy (1992) show that the best contract in such a situation gives sharper incentives later in life. I don't know if they find it (having not read the article), but I could imagine that the ideal contract would penalize the manager for high output in the early years, since then he is working so hard just to convey information, not to generate social surplus.

The usual form of such models has more than one firm observing the manager, so he can increase his wage as they find out more. That kind of model could also work, I think, with just a single employing firm, but then it is a bilateral monopoly, so the firm and the manager bargain with each other, each getting some of the surplus. The manager would like to impress the firm because that will increase the bargained-for wage.

Another sort of career concerns model, quite different actually, is the idea in Prendergast (1993) that the employer will want to promote an able worker, and since courts (and outside employers) can observe the promotion, that can be contracted upon (or can competitively increase the worker's wage). This is a mechanism design problem if we are talking about designing a contract, since the two players can commit to a mechanism which makes transfers based on their private information and actions plus a public action: whether the worker is promoted.

Thus, suppose there are talented and stupid managers. After the first period, the CEO will discover which is which. At the easy task, the managers have marginal products of 80 and 100. At the hard task, the talented manager has a marginal product of 130 and the untalented manager has a marginal product of 0. CEOs compete by offering contracts to managers.

The manager and CEO can contract to pay the manager for two periods: 90 in a period when the CEO gives the manager the easy task, or 120 if he gives him the hard task.

In the first period, the CEO will want to put the manager on the easy task, since that yields an expected 0 (.5(80) + .5(100) - 90) instead of -55 (.5(0) + .5(130) - 120). In the second period, if the manager is talented, the employer could put him on the easy task and pay 90 and get 10 in profit, or put him on the hard task and get 10 in profit (130 - 120). For our equilibrium, he will put him on the hard task in that case (otherwise, set the hard-task wage equal to 119 to get rid of the indifference).
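Here is that arithmetic as a quick check, using only the wages and marginal products given above.

    # Arithmetic for the task-assignment example (contract wages: 90 easy, 120 hard).
    w_easy, w_hard = 90, 120

    # First period: the CEO does not yet know the manager's type (50-50 prior).
    exp_easy = 0.5 * 80 + 0.5 * 100 - w_easy     # = 0
    exp_hard = 0.5 * 0 + 0.5 * 130 - w_hard      # = -55
    print(exp_easy, exp_hard)                    # the easy task wins in period 1

    # Second period, talented manager on the hard task: 130 - 120 = 10.
    print(130 - w_hard)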

Notice that the CEO makes a profit of 10 under this contract. So we really should add a signing bonus of 10 for the manager, to take away that profit.

We could also run the model a different way. Suppose the wage CANNOT be made contingent on the task, but outside CEOs can observe which task is assigned. Try the following contract:

The manager agrees to work for the CEO for two periods at a wage of X.

The CEO will assign him to the easy task in the first period. In the second period, he can assign him to the easy task again and pay him X, or assign him to the hard task and *still* pay him X. The CEO will of course assign the talented manager to the hard task, at which moment the talented manager's market wage will shoot up to 130.

Some Small Errors Pointed Out and Equations Explained (very rough)

Hubert question: The last paragraph of the second column of p. 9 ("The equilibrium first-period effort a1* has intuitive comparative statics"). It says that "h0" represents the precision, with a larger "h0" meaning more precise. However, my understanding is that h0 is the standard deviation of the prior belief about the ability "eta". That means a higher "h0" means that it is LESS accurate. The same argument holds for "h_epsilon". So when it talks about "more precise", should it actually mean that the standard deviation (either h0 or h_epsilon) decreases?

Eric answer: Gibbons seems to have made a mistake here. I'll explain what he's doing, and where he slipped.

What we want to model here is rational learning when the purchaser has a prior (mean m0, variance h0) and one observation (y1, with noise variance h_epsilon), and the quantity being estimated is the ability, eta. y1 is related to eta by

(1) y1 = eta + a1 + epsilon1 (an equation like Gibbons' on p. 9, left column)

so

(2) eta = y1 - a1 - epsilon1

We need to use Bayes rule. For just one value of eta,

(3) Prob(eta|y1) = Prob(y1|eta) Prob(eta) / Prob(y1)

But we are interested in the purchaser's *expected value* of eta given that he sees y1. That's what the purchaser will set equal to the wage. So we need to integrate (3) times eta over all eta, to get E(eta|y1).

That's what DeGroot did. This is the Bayesian inference problem with a normal conjugate prior and normal noise. It turns out that

(4) E(eta|y1) = [h_epsilon*m0 + h0*(y1 - a1)] / [h_epsilon + h0]

((4) is so neat because Prob(y1|eta) and Prob(eta) in (3) are both normal, and when the densities are multiplied their exponents, which involve the variances, combine nicely.)

I didn't remember (4), though I've seen it before. For (4), I checked

Laskey notes, p. 37. http://ite.gmu.edu/~klaskey/SYST664/Bayes_Unit3.pdf

other notes: http://www.math.uah.edu/stat/point/Bayes.xhtml

That's where Gibbons went wrong. (4) is peculiar because it is the variance of the OBSERVATION that multiplies the prior mean m0, not the variance of the prior.

Of course, the intuition makes more sense using (4). The estimate is a weighted average of the datum and the prior mean, with the prior mean weighted higher if the datum is highly variable.

Also, here, we use (y1 - a1) rather than just y1, because the purchaser believes that y1 is shifted above eta by the amount a1, the effort he expects the supplier to choose.
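To check (4) under the convention used in these notes (h0 and h_epsilon are variances, not precisions), here is a small Monte Carlo sketch with made-up parameter values.

    # Monte Carlo check of equation (4), treating h0 and h_epsilon as variances.
    import numpy as np

    rng = np.random.default_rng(0)
    m0, h0, h_eps, a1 = 2.0, 4.0, 1.0, 0.5           # prior mean/variance, noise variance, effort

    eta = rng.normal(m0, np.sqrt(h0), 2_000_000)     # draw abilities from the prior
    y1 = eta + a1 + rng.normal(0, np.sqrt(h_eps), eta.size)

    # Condition on y1 near a target value and compare E(eta | y1) with formula (4).
    target = 4.0
    band = np.abs(y1 - target) < 0.01
    empirical = eta[band].mean()
    formula = (h_eps * m0 + h0 * (target - a1)) / (h_eps + h0)
    print(round(empirical, 2), round(formula, 2))    # both about 3.2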

The next equation in Gibbons doesn't involve that stuff directly. It just says the payoff is

(5) [w1-c(a1)] + ...

the sum of the first-period payoff and the discounted second-period payoff, where the supplier is trying to forecast the second-period wage. He knows it will equal the buyer's E(eta|y1). The supplier uses his actual a1 in forming that forecast, because he knows the truth of it, since he picks it.

Maximizing (5), the supplier gets FOC

(6) -c'(a1) + delta dEw2/da1 = 0,

That derivative is, from (4),

dEw2/da1 = h0/[h_epsilon + h0],

so

(7) c'(a1) = delta h0/[h_epsilon + h0],

which Gibbons gets wrong because of his earlier error.

Then, the comparative statics make sense.
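For concreteness, suppose the cost function is quadratic, c(a) = a^2/2, so that c'(a) = a; that functional form is my own assumption, just to make (7) easy to evaluate.

    # First-period effort from (7) under an assumed quadratic cost c(a) = a^2/2, so c'(a) = a.
    def a1_star(delta, h0, h_eps):
        return delta * h0 / (h0 + h_eps)

    print(round(a1_star(0.9, h0=4.0, h_eps=1.0), 2))   # 0.72
    print(round(a1_star(0.9, h0=8.0, h_eps=1.0), 2))   # 0.8: a more diffuse prior (bigger h0) raises effort
    print(round(a1_star(0.9, h0=4.0, h_eps=4.0), 2))   # 0.45: noisier output (bigger h_eps) lowers effort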

The error also extends to the bottom equation, for T periods. It should be

c'(a1) = Sum over s = 2,...,T of delta^{s-1} h0/[h_epsilon + (s-1)h0]

The bigger is s, the more data there is, so the data come to swamp the prior mean and the weight on any one observation shrinks. Also, c'(a1) equals this sum because a1 affects all the future estimates, and so it affects all future wages.
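Under the same variance reading of h0 and h_epsilon, here is what those weights look like for some illustrative values (delta, h0, h_epsilon, and T are my own numbers).

    # Weights on a1 in future wages for the T-period case, variance convention, illustrative values.
    delta, h0, h_eps, T = 0.9, 4.0, 1.0, 6

    terms = [delta**(s - 1) * h0 / (h_eps + (s - 1) * h0) for s in range(2, T + 1)]
    for s, term in zip(range(2, T + 1), terms):
        print(f"period {s}: weight on a1 = {term:.3f}")   # shrinks with discounting and with more data
    print("c'(a1) =", round(sum(terms), 3))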

Hubert question: Section 5B, "Relational Contracts Between or Within Firms." For the middle paragraph of the first column of p. 15, starting with "When the downstream party owns the assets...": it has a formula "S^SO > S^SE". However, they did not define S^SE. I assume that it should be the S^R that they defined 10 lines above. Am I correct?

Eric answer: S^SE should always be S^SI. It's referring to the assumption that S^SI > S^SO made in the middle of the left column of p. 15.

Because of that assumption, the Coase Theorem says that the asset will be owned by the upstream player at the end of the day.

Hubert: Just a note: in equation (1) (finally they have a number for a formula!), it should be "-pi" instead of "pi".

Eric: You are right. And I am relieved to see an equation number too.


References

Gibbons, R. 2005. Incentives between firms (and within). Management Sci. 51(1) 2-17.

Gibbons, R., K. J. Murphy. 1992. Optimal incentive contracts in the presence of career concerns: Theory and evidence. J. Political Econom. 100 468-505.

Holmstrom, B. 1982. Managerial incentive problems: A dynamic perspective. Essays in Economics and Management in Honor of Lars Wahlbeck. Swedish School of Economics, Helsinki, Sweden.

Kerr, S. 1975. On the folly of rewarding A, while hoping for B. Acad. Management J. 18 769-783.

Mailath, G. J., L. Samuelson. 2006. Repeated Games and Reputations: Long-Run Relationships. Oxford University Press, Oxford, UK.

Prendergast, C. 1993. The role of promotion in inducing specific human capital acquisition. Quart. J. Econom. 108 523-534.

Tirole, J. 1988. The Theory of Industrial Organization. MIT Press, Cambridge, MA.